Probabilistic Dimensionality Reduction via Structure Learning
Authors
Abstract
We propose a novel probabilistic dimensionality reduction framework that can naturally integrate the generative model and the locality information of data. Based on this framework, we present a new model, which is able to learn a smooth skeleton of embedding points in a low-dimensional space from high-dimensional noisy data. The formulation of the new model can be equivalently interpreted as tw...
Similar Articles
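The abstract describes a probabilistic framework built on a generative model of the data. As a minimal, hedged illustration of the generative side only (this is classical probabilistic PCA, not the paper's structure-learning model, and it omits the locality term), one can fit a linear-Gaussian latent variable model in closed form:

```python
import numpy as np

def ppca(X, q):
    """Maximum-likelihood probabilistic PCA (a baseline generative model).

    X: (n, d) data matrix; q: latent dimensionality.
    Returns the loading matrix W (d, q) and noise variance sigma2.
    """
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]            # sort eigenvalues descending
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                  # ML estimate of noise variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

def embed(X, W, sigma2):
    """Posterior means of the latent variables: the low-dimensional embedding."""
    Xc = X - X.mean(axis=0)
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    return Xc @ W @ np.linalg.inv(M).T
```

The paper's contribution is precisely to go beyond such a plain generative model by coupling it with locality information, so this sketch only fixes ideas about what "generative model" means here.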
Transfer Learning via Dimensionality Reduction
Transfer learning addresses the problem of how to utilize plenty of labeled data in a source domain to solve related but different problems in a target domain, even when the training and testing problems have different distributions or features. In this paper, we consider transfer learning via dimensionality reduction. To solve this problem, we learn a low-dimensional latent feature space where...
Probabilistic Spectral Dimensionality Reduction
We introduce a new perspective on spectral dimensionality reduction which views these methods as Gaussian random fields (GRFs). Our unifying perspective is based on the maximum entropy principle which is in turn inspired by maximum variance unfolding. The resulting probabilistic models are based on GRFs. The resulting model is a nonlinear generalization of principal component analysis. We show ...
Dimensionality Reduction via Discretization
The existence of numeric data and large amounts of records in a database pose a challenging task to explicit concepts extraction from the raw data. This paper introduces a method that reduces data vertically and horizontally, keeps the discriminating power of the original data, and paves the way for extracting concepts. The method is based on discretization (vertical reduction) and feature sele...
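The snippet above describes vertical reduction via discretization of numeric columns. As a minimal sketch only (the paper's actual discretization criterion is not given here; equal-width binning is an assumption for illustration), a numeric column can be mapped to integer codes like this:

```python
import numpy as np

def discretize(col, n_bins=4):
    """Equal-width binning of a numeric column into integer bin codes 0..n_bins-1."""
    edges = np.linspace(col.min(), col.max(), n_bins + 1)
    # Inner edges only; values on an edge fall into the higher bin.
    return np.clip(np.digitize(col, edges[1:-1]), 0, n_bins - 1)

x = np.array([0.0, 1.0, 2.5, 7.5, 10.0])
codes = discretize(x, n_bins=4)  # each value mapped to one of 4 bins
```

Replacing raw numeric values with a handful of codes is what makes the subsequent concept extraction tractable, at the cost of within-bin resolution.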
Dimensionality Reduction and Learning
The theme of these two lectures is that for L2 methods we need not work in infinite dimensional spaces. In particular, we can unadaptively find and work in a low dimensional space and achieve about as good results. These results question the need for explicitly working in infinite (or high) dimensional spaces for L2 methods. In contrast, for sparsity based methods (including L1 regularization),...
Journal
Journal title: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2019
ISSN: 0162-8828, 2160-9292, 1939-3539
DOI: 10.1109/tpami.2017.2785402